Search for: All records

Creators/Authors contains: "Søgaard, A."


  1. Abstract

    IceCube alert events are neutrinos with a moderate-to-high probability of having astrophysical origin. In this study, we analyze 11 yr of IceCube data and investigate 122 alert events and a selection of high-energy tracks detected between 2009 and the end of 2021. This high-energy event selection (alert events + high-energy tracks) has an average probability of ≥0.5 of being of astrophysical origin. We search for additional continuous and transient neutrino emission within the high-energy events’ error regions. We find no evidence for significant continuous neutrino emission from any of the alert event directions. The only locally significant neutrino emission is the transient emission associated with the blazar TXS 0506+056, with a local significance of 3σ, which confirms previous IceCube studies. When correcting for 122 test positions, the global p-value is 0.156 and compatible with the background hypothesis. We constrain the total continuous flux emitted from all 122 test positions at 100 TeV to be below 1.2 × 10⁻¹⁵ (TeV cm² s)⁻¹ at 90% confidence, assuming an E⁻² spectrum. This corresponds to 4.5% of IceCube’s astrophysical diffuse flux. Overall, we find no indication that alert events in general are linked to lower-energetic continuous or transient neutrino emission.

     
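     As a quick consistency check using only the two numbers quoted in this abstract, the 90% upper limit and the 4.5% fraction together imply a reference all-sky diffuse flux level at 100 TeV of

$$\Phi_{\mathrm{diffuse}}(100\,\mathrm{TeV})\;\approx\;\frac{1.2\times10^{-15}\,(\mathrm{TeV\,cm^{2}\,s})^{-1}}{0.045}\;\approx\;2.7\times10^{-14}\,(\mathrm{TeV\,cm^{2}\,s})^{-1},$$

     i.e., the diffuse normalization against which the summed limit from the 122 test positions is compared. This is only a back-of-the-envelope rearrangement of the quoted values, not an additional result.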
  2. Abstract Core-collapse supernovae are a promising potential high-energy neutrino source class. We test for correlation between seven years of IceCube neutrino data and a catalog containing more than 1000 core-collapse supernovae of types IIn and IIP and a sample of stripped-envelope supernovae. We search both for neutrino emission from individual supernovae and for combined emission from the whole supernova sample through a stacking analysis. No significant spatial or temporal correlation of neutrinos with the cataloged supernovae was found. All scenarios were tested against the background expectation and together yield an overall p-value of 93%; they are therefore consistent with background only. The derived upper limits on the total energy emitted in neutrinos are 1.7 × 10⁴⁸ erg for stripped-envelope supernovae, 2.8 × 10⁴⁸ erg for type IIP, and 1.3 × 10⁴⁹ erg for type IIn SNe, the latter disfavoring models with optimistic assumptions for neutrino production in interacting supernovae. We conclude that stripped-envelope supernovae and type IIn supernovae do not contribute more than 14.6% and 33.9%, respectively, to the diffuse neutrino flux in the energy range of about 10³–10⁵ GeV, assuming that the neutrino energy spectrum follows a power law with an index of −2.5. Under the same assumption, we can only constrain the contribution of type IIP SNe to no more than 59.9%. Thus, type IIn supernovae and stripped-envelope supernovae can both be ruled out as the dominant source of the diffuse neutrino flux under the given assumptions.
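     For readers unfamiliar with the stacking technique mentioned above, IceCube stacking searches are typically based on an unbinned likelihood of the schematic form

$$\mathcal{L}(n_s,\gamma)=\prod_{i=1}^{N}\left[\frac{n_s}{N}\sum_{j}w_j\,\mathcal{S}_j(x_i;\gamma)+\left(1-\frac{n_s}{N}\right)\mathcal{B}(x_i)\right],$$

     where $n_s$ is the total number of signal events attributed to the supernova sample, $\gamma$ the spectral index, $\mathcal{S}_j$ and $\mathcal{B}$ the per-source signal and background PDFs, and the weights $w_j$ (with $\sum_j w_j = 1$) encode the assumed relative contribution of each supernova. The specific PDFs and weighting schemes used in this analysis are not given in the abstract; this is only the generic form of the method.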
  3. Abstract The D-Egg, an acronym for “Dual optical sensors in an Ellipsoid Glass for Gen2,” is one of the optical modules designed for future extensions of the IceCube experiment at the South Pole. The D-Egg has an elongated-sphere shape to maximize the photon-sensitive effective area while maintaining a narrow diameter to reduce the cost and time needed to drill the deployment holes in the glacial ice, where the optical modules are installed at depths of up to 2700 m. The D-Egg design is used for the IceCube Upgrade, the next stage of the IceCube project, also known as IceCube-Gen2 Phase 1, where nearly half of the optical sensors to be deployed are D-Eggs. With two 8-inch high-quantum-efficiency photomultiplier tubes (PMTs) per module, D-Eggs offer an increased effective area while retaining the successful design of the IceCube digital optical module (DOM). The convolution of the wavelength-dependent effective area and the Cherenkov emission spectrum gives an effective photodetection sensitivity 2.8 times larger than that of IceCube DOMs. The signal of each of the two PMTs is digitized with ultra-low-power 14-bit analog-to-digital converters sampling at 240 MSPS, enabling flexible event triggering as well as seamless, lossless recording of events ranging from single photons to multi-photon signals exceeding 200 photoelectrons within 10 ns. Mass production of D-Eggs has been completed, with 277 of the 310 D-Eggs produced to be used in the IceCube Upgrade. In this paper, we report the design of the D-Eggs, as well as the sensitivity and the single- to multi-photon detection performance of mass-produced D-Eggs, measured in the laboratory using the built-in data acquisition system of each D-Egg optical sensor module.
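     For scale, the digitization parameters quoted above correspond to a raw per-channel throughput of

$$14\ \mathrm{bit}\times 240\times10^{6}\ \mathrm{samples/s}\approx 3.4\ \mathrm{Gbit/s\ per\ PMT},$$

     before any triggering or compression (which the abstract does not quantify), consistent with the event-based triggering and recording scheme described above rather than a continuous stream of raw samples.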
  4. Abstract This paper presents the results of a search for neutrinos that are spatially and temporally coincident with 22 unique, nonrepeating fast radio bursts (FRBs) and one repeating FRB (FRB 121102). FRBs are a rapidly growing class of Galactic and extragalactic astrophysical objects that are considered a potential source of high-energy neutrinos. The IceCube Neutrino Observatory’s previous FRB analyses have used track events only. This search utilizes seven years of IceCube cascade events, which are statistically independent of track events. The low background rate of this event selection allows probing a wider range of extended timescales. No statistically significant clustering of neutrinos was observed. Upper limits are set on the time-integrated neutrino flux emitted by FRBs for a range of extended time windows.
  5. Abstract Gamma-ray bursts (GRBs) have long been considered a possible source of high-energy neutrinos. While no correlations have yet been detected between high-energy neutrinos and GRBs, the recent observation of GRB 221009A—the brightest GRB observed by Fermi-GBM to date and the first one to be observed above an energy of 10 TeV—provides a unique opportunity to test for hadronic emission. In this paper, we leverage the wide energy range of the IceCube Neutrino Observatory to search for neutrinos from GRB 221009A. We find no significant deviation from background expectation across event samples ranging from MeV to PeV energies, placing stringent upper limits on the neutrino emission from this source. 
  6. Abstract Galactic PeV cosmic-ray accelerators (PeVatrons) are sources theorized to accelerate cosmic rays up to PeV energies. The accelerated cosmic rays are expected to interact hadronically with nearby ambient gas or the interstellar medium, producing γ-rays and neutrinos. Recently, the Large High Altitude Air Shower Observatory (LHAASO) identified 12 γ-ray sources with emission above 100 TeV, making them candidate PeVatrons. While at these high energies the Klein–Nishina effect exponentially suppresses leptonic emission from Galactic sources, evidence for neutrino emission would unequivocally confirm hadronic acceleration. Here, we present the results of a search for neutrinos from these γ-ray sources, together with stacking searches for excess neutrino emission from all 12 sources as well as from their subcatalogs of supernova remnants and pulsar wind nebulae, using 11 yr of track events from the IceCube Neutrino Observatory. No significant emission was found. Based on the resulting limits, we place constraints on the fraction of γ-ray flux originating from hadronic processes in the Crab Nebula and LHAASO J2226+6057.
  7. Abstract IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in the analysis of IceCube data. Reconstructing and classifying events is a challenge due to the irregular detector geometry, the inhomogeneous scattering and absorption of light in the ice and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, IceCube events can be represented as point cloud graphs and a Graph Neural Network (GNN) used as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1 GeV–100 GeV energy range to the state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed background rate, compared to current IceCube methods. Alternatively, the GNN reduces the background (i.e., false positive) rate by over a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by an average of 13%–20% compared to current maximum likelihood techniques in the energy range of 1 GeV–30 GeV. The GNN, when run on a GPU, can process IceCube events at a rate nearly double the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
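     To make the point-cloud-graph idea concrete, the sketch below shows a minimal GNN of this kind in PyTorch Geometric. It is an illustration only, not the model used in the paper: the choice of five pulse features, two DynamicEdgeConv layers, 64 hidden units, k = 8 neighbours, and a single classification head are all assumptions made for the example (torch-cluster must be installed for the k-NN graph building).

```python
import torch
from torch import nn
from torch_geometric.data import Data
from torch_geometric.nn import DynamicEdgeConv, global_mean_pool


class ToyIceCubeGNN(nn.Module):
    """Toy point-cloud GNN: each recorded pulse is a graph node with
    assumed features (x, y, z, time, charge); k-nearest-neighbour edges
    are rebuilt dynamically in feature space by DynamicEdgeConv."""

    def __init__(self, n_features: int = 5, hidden: int = 64, k: int = 8):
        super().__init__()

        def mlp(n_in: int, n_out: int) -> nn.Sequential:
            # DynamicEdgeConv feeds the pair [x_i, x_j - x_i], hence the factor 2.
            return nn.Sequential(
                nn.Linear(2 * n_in, n_out), nn.ReLU(),
                nn.Linear(n_out, n_out), nn.ReLU(),
            )

        self.conv1 = DynamicEdgeConv(mlp(n_features, hidden), k=k)
        self.conv2 = DynamicEdgeConv(mlp(hidden, hidden), k=k)
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # e.g. a neutrino-vs-background logit
        )

    def forward(self, data: Data) -> torch.Tensor:
        x, batch = data.x, data.batch
        x = self.conv1(x, batch)
        x = self.conv2(x, batch)
        x = global_mean_pool(x, batch)   # one summary vector per event
        return self.head(x).squeeze(-1)  # one score per event


# Usage on a single fake event with 30 pulses:
event = Data(x=torch.randn(30, 5), batch=torch.zeros(30, dtype=torch.long))
score = ToyIceCubeGNN()(event)
```

     Reconstruction targets (energy, direction, interaction vertex) would be handled by replacing or extending the final head with regression outputs; the published model is more elaborate than this sketch.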
  8. Abstract The accurate simulation of additional interactions at the ATLAS experiment for the analysis of proton–proton collisions delivered by the Large Hadron Collider presents a significant challenge to computing resources. During LHC Run 2 (2015–2018), there were up to 70 inelastic interactions per bunch crossing, which need to be accounted for in Monte Carlo (MC) production. In this document, a new method of accounting for these additional interactions in the simulation chain is described. Instead of sampling the inelastic interactions and adding their energy deposits to a hard-scatter interaction one by one, the inelastic interactions are presampled, independently of the hard scatter, and stored as combined events. Consequently, for each hard-scatter interaction, only one such presampled event needs to be added as part of the simulation chain. For the Run 2 simulation chain, with an average of 35 interactions per bunch crossing, this new method reduces the CPU needed for MC production by around 20%, while reproducing the properties of the reconstructed quantities relevant for physics analyses with good accuracy.
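     The bookkeeping change described above can be sketched in a few lines. This is a deliberately toy illustration (an "event" is just a list of energy deposits, the minimum-bias pool and μ = 35 are placeholders), not ATLAS software:

```python
import numpy as np

rng = np.random.default_rng(0)
MU = 35  # average number of inelastic interactions per bunch crossing (Run 2)


def overlay_on_the_fly(hard_scatter, minbias_pool):
    """Old approach: draw ~MU inelastic interactions and add their
    deposits to this one hard-scatter event."""
    picks = rng.choice(len(minbias_pool), size=rng.poisson(MU))
    return hard_scatter + [dep for i in picks for dep in minbias_pool[i]]


def presample_pileup(minbias_pool, n_events):
    """New approach, step 1: build combined pile-up events once,
    independently of any hard scatter, and store them."""
    return [
        [dep for i in rng.choice(len(minbias_pool), size=rng.poisson(MU))
         for dep in minbias_pool[i]]
        for _ in range(n_events)
    ]


def overlay_presampled(hard_scatter, presampled_event):
    """New approach, step 2: add exactly one stored presampled event
    per hard-scatter event."""
    return hard_scatter + presampled_event
```

     The CPU saving quoted above comes from assembling the ~35 overlaid interactions once, in the presampling step, instead of repeating that work for every hard-scatter event.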
  9. Abstract During LHC Run 2 (2015–2018) the ATLAS Level-1 topological trigger allowed efficient data-taking by the ATLAS experiment at luminosities up to 2.1 × 10³⁴ cm⁻² s⁻¹, which exceeds the design value by a factor of two. The system was installed in 2016 and operated in 2017 and 2018. It uses Field Programmable Gate Array processors to select interesting events by placing kinematic and angular requirements on electromagnetic clusters, jets, τ-leptons, muons and the missing transverse energy. It significantly improved background event rejection and signal event acceptance, in particular for Higgs and B-physics processes.
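     As an example of the kinematic and angular requirements mentioned above, topological selections typically cut on quantities such as the angular separation and the invariant mass (in the massless approximation) of pairs of trigger objects,

$$\Delta R=\sqrt{\Delta\eta^{2}+\Delta\phi^{2}},\qquad m_{12}^{2}\simeq 2\,E_{\mathrm{T},1}\,E_{\mathrm{T},2}\left(\cosh\Delta\eta-\cos\Delta\phi\right),$$

     computed in firmware from the cluster, jet, τ, muon and missing-transverse-energy inputs; the specific selections deployed in Run 2 are not listed in the abstract.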
  10. Abstract Several improvements to the ATLAS triggers used to identify jets containing b-hadrons (b-jets) were implemented for data-taking during Run 2 of the Large Hadron Collider from 2016 to 2018. These changes include reconfiguring the b-jet trigger software to improve primary-vertex finding and to allow more stable running in conditions with high pile-up, and implementing the functionality needed to run the sophisticated taggers used by the offline reconstruction in an online environment. These improvements yielded an order of magnitude better light-flavour jet rejection for the same b-jet identification efficiency compared to the performance in Run 1 (2011–2012). The efficiency to identify b-jets in the trigger, and the conditional efficiency for b-jets that satisfy offline b-tagging requirements to pass the trigger, are also measured. Correction factors are derived to calibrate the b-tagging efficiency in simulation to match that observed in data. The associated systematic uncertainties are substantially smaller than in previous measurements. In addition, b-jet triggers were operated for the first time during heavy-ion data-taking, using dedicated triggers that were developed to identify semileptonic b-hadron decays by selecting events with geometrically overlapping muons and jets.
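     The correction factors mentioned above are conventionally defined as data-to-simulation scale factors of the b-tagging (or trigger) efficiency,

$$\mathrm{SF}_{b}=\frac{\varepsilon_{b}^{\mathrm{data}}}{\varepsilon_{b}^{\mathrm{MC}}},$$

     applied as per-jet weights in simulation; any binning (e.g. in jet transverse momentum) is not specified in the abstract.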